16 research outputs found
Machine learning approach for segmenting glands in colon histology images using local intensity and texture features
Colon cancer is one of the most common types of cancer, and its treatment is
planned according to the grade or stage of the cancer. One of the
preconditions for grading colon cancer is to segment the glandular structures
of the tissue. Manual segmentation is very time-consuming, and the resulting
delays can put patients at risk. The principal objective of this project is to
assist pathologists in the accurate detection of colon cancer. In this paper,
the authors propose an algorithm for the automatic segmentation of glands in
colon histology using local intensity and texture features. The dataset images
are cropped into patches with different window sizes; the intensity of these
patches is taken and texture-based features are calculated. A random forest
classifier is used to classify each patch into different labels, and a
multilevel random forest technique applied in a hierarchical way is proposed.
This solution is fast and accurate, and it is very much applicable in a
clinical setup.
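As a rough illustration of the patch-and-classify pipeline described above, the following sketch trains a random forest on simple intensity and texture-style features extracted from synthetic patches. The feature set, window size, and data here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(patch):
    """Local intensity and simple texture-style statistics for one patch."""
    return np.array([
        patch.mean(),                            # local intensity
        patch.std(),                             # contrast
        np.abs(np.diff(patch, axis=0)).mean(),   # vertical gradient energy
        np.abs(np.diff(patch, axis=1)).mean(),   # horizontal gradient energy
    ])

def patches(image, size, stride):
    """Crop an image into square patches of the given window size."""
    h, w = image.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield image[y:y + size, x:x + size]

rng = np.random.default_rng(0)
# Synthetic stand-ins: "gland" patches are brighter than "background" patches.
gland = rng.normal(0.8, 0.05, (200, 16, 16))
background = rng.normal(0.2, 0.05, (200, 16, 16))
X = np.array([extract_features(p) for p in np.concatenate([gland, background])])
y = np.array([1] * 200 + [0] * 200)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Classify every patch of a bright (gland-like) synthetic image.
image = rng.normal(0.8, 0.05, (64, 64))
preds = [clf.predict([extract_features(p)])[0] for p in patches(image, 16, 16)]
```

A hierarchical, multilevel variant would feed the patch labels from one forest into a second-stage forest; the single-stage sketch above only shows the basic patch classification step.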
Exploration of Interpretability Techniques for Deep COVID-19 Classification using Chest X-ray Images
The outbreak of COVID-19 has shocked the entire world with its fairly rapid
spread and has challenged different sectors. One of the most effective ways to
limit its spread is the early and accurate diagnosis of infected patients.
Medical imaging such as X-ray and Computed Tomography (CT) combined with the
potential of Artificial Intelligence (AI) plays an essential role in supporting
the medical staff in the diagnosis process. Accordingly, five different deep
learning models (ResNet18, ResNet34, InceptionV3, InceptionResNetV2, and
DenseNet161) and their ensemble were used in this paper to classify COVID-19,
pneumonia, and healthy subjects using chest X-rays. Multi-label
classification was performed to predict multiple pathologies for each patient,
if present. Foremost, the interpretability of each of the networks was
thoroughly studied using techniques like occlusion, saliency, input X gradient,
guided backpropagation, integrated gradients, and DeepLIFT. The mean Micro-F1
score of the models for COVID-19 classification ranges from 0.66 to 0.875, and
is 0.89 for the ensemble of the network models. The qualitative results
showed the ResNets to be the most interpretable models.
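For reference, the micro-averaged F1 metric reported above pools true positives, false positives, and false negatives across all labels before computing precision and recall. A minimal sketch with synthetic multi-label predictions (not the paper's data):

```python
import numpy as np
from sklearn.metrics import f1_score

# Rows: patients; columns: COVID-19, pneumonia, healthy (multi-label).
y_true = np.array([[1, 0, 0],
                   [1, 1, 0],
                   [0, 0, 1],
                   [0, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [1, 0, 0],
                   [0, 0, 1],
                   [0, 1, 0]])

# Micro-F1 pools TP/FP/FN over all labels before computing precision/recall.
micro_f1 = f1_score(y_true, y_pred, average="micro")
```

Here TP = 4, FN = 1, FP = 0, so precision is 1.0, recall is 0.8, and the micro-F1 is 8/9 ≈ 0.889.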
DS6, Deformation-aware Semi-supervised Learning: Application to Small Vessel Segmentation with Noisy Training Data
The blood vessels of the brain supply it with the required nutrients and
oxygen. As a vulnerable part of the cerebral blood supply,
pathology of small vessels can cause serious problems such as Cerebral Small
Vessel Diseases (CSVD). It has also been shown that CSVD is related to
neurodegeneration, such as in Alzheimer's disease. With the advancement of 7
Tesla MRI systems, higher spatial image resolution can be achieved, enabling
the depiction of very small vessels in the brain. Non-Deep Learning based
approaches for vessel segmentation, e.g. Frangi's vessel enhancement with
subsequent thresholding, are capable of segmenting medium to large vessels but
often fail to segment small vessels. The sensitivity of these methods to small
vessels can be increased by extensive parameter tuning or by manual
corrections, albeit making them time-consuming, laborious, and not feasible for
larger datasets. This paper proposes a deep learning architecture to
automatically segment small vessels in 7 Tesla 3D Time-of-Flight (ToF) Magnetic
Resonance Angiography (MRA) data. The algorithm was trained and evaluated on a
small imperfect semi-automatically segmented dataset of only 11 subjects; using
six for training, two for validation, and three for testing. A deep learning
model based on U-Net with Multi-Scale Supervision was trained on the training
subset and made equivariant to elastic deformations in a self-supervised manner
using deformation-aware learning to improve the generalisation performance. The
proposed technique was evaluated quantitatively and qualitatively against the
test set and achieved a Dice score of 80.44±0.83. Furthermore, the result
of the proposed method was compared against a selected manually segmented
region (with a resultant Dice of 62.07) and showed a considerable improvement
(18.98%) with deformation-aware learning.
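The Dice score used for evaluation above can be sketched as follows, computed here on synthetic binary vessel masks (illustrative data, not the 7T MRA volumes):

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

pred = np.zeros((8, 8, 8), dtype=np.uint8)
target = np.zeros((8, 8, 8), dtype=np.uint8)
pred[2:6, 2:6, 2:6] = 1      # predicted vessel voxels
target[3:7, 2:6, 2:6] = 1    # ground-truth vessel voxels
score = dice(pred, target)   # overlap of 48 voxels out of 64 + 64 → 0.75
```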
Voxel-wise classification for porosity investigation of additive manufactured parts with 3D unsupervised and (deeply) supervised neural networks
Additive Manufacturing (AM) has emerged as a manufacturing process that
allows the direct production of samples from digital models. To ensure that
quality standards are met in all manufactured samples of a batch, X-ray
computed tomography (X-CT) is often used combined with automated anomaly
detection. For the latter, deep learning (DL) anomaly detection techniques are
increasingly used, as they can be trained to be robust to the material being
analysed and resilient towards poor image quality. Unfortunately, most recent
and popular DL models have been developed for 2D image processing, thereby
disregarding valuable volumetric information.
This study revisits recent supervised (UNet, UNet++, UNet 3+, MSS-UNet) and
unsupervised (VAE, ceVAE, gmVAE, vqVAE) DL models for porosity analysis of AM
samples from X-CT images and extends them to accept 3D input data with a
3D-patch pipeline for lower computational requirements, improved efficiency and
generalisability. The supervised models were trained using the Focal Tversky
loss to address class imbalance that arises from the low porosity in the
training datasets. The output of the unsupervised models is post-processed to
reduce misclassifications caused by their inability to adequately represent the
object surface. The findings were cross-validated in a 5-fold fashion and
include: a performance benchmark of the DL models, an evaluation of the
post-processing algorithm, and an evaluation of the effect of training
supervised models with the output of unsupervised models. In a final
performance benchmark
on a test set with poor image quality, the best performing supervised model was
MSS-UNet with an average precision of 0.808±0.013, while the best
unsupervised model was the post-processed ceVAE with 0.935±0.001. The
VAE/ceVAE models demonstrated superior capabilities, particularly when
leveraging post-processing techniques.
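The Focal Tversky loss mentioned above can be sketched as follows; the alpha, beta, and gamma values are common illustrative defaults, not necessarily those used in the study:

```python
import numpy as np

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-8):
    """Loss = (1 - TI)**gamma with TI = TP / (TP + alpha*FN + beta*FP).

    pred: predicted pore probabilities in [0, 1]; target: binary pore mask.
    alpha > beta penalises false negatives harder, which suits the strong
    class imbalance caused by the low porosity of the training data.
    """
    tp = (pred * target).sum()
    fn = ((1 - pred) * target).sum()
    fp = (pred * (1 - target)).sum()
    tversky = tp / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky) ** gamma

target = np.array([0, 0, 0, 1, 1], dtype=float)
perfect = focal_tversky_loss(target, target)   # near 0: perfect prediction
poor = focal_tversky_loss(1 - target, target)  # 1.0: every voxel wrong
```

Gamma < 1 amplifies the gradient for hard examples where the Tversky index is already high, which is the "focal" part of the loss.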
Classification of brain tumours in MR images using deep spatiospatial models
A brain tumour is a mass or cluster of abnormal cells in the brain, which can
become life-threatening because of its ability to invade neighbouring tissues
and form metastases. An accurate diagnosis is essential for successful
treatment planning, and magnetic resonance imaging is the principal imaging
modality for diagnosing brain tumours and their extent. Deep learning methods
in computer vision applications have shown significant improvement in recent
years, most of which can be credited to the fact that a sizeable amount of
data is available to train models, and to improvements in model architectures
that yield better approximations in a supervised setting. Classifying tumours
using such deep learning methods has made significant progress with the
availability of open datasets with reliable annotations. Typically, those
methods are either 3D models, which use 3D volumetric MRIs, or 2D models that
consider each slice separately. However, by treating one spatial dimension
separately, or by considering the slices as a sequence of images over time,
spatiotemporal models can be employed as "spatiospatial" models for this task.
These models can learn specific spatial and temporal relationships while
reducing computational costs. This paper uses two spatiotemporal models,
ResNet (2+1)D and ResNet Mixed Convolution, to classify different types of
brain tumours. It was observed that both these models outperformed the pure 3D
convolutional model, ResNet18. Furthermore, it was also observed that
pre-training the models on a different, even unrelated, dataset before
training them for the task of tumour classification improves the performance.
Finally, the pre-trained ResNet Mixed Convolution was observed to be the best
model in these experiments, achieving a macro F1-score of 0.9345 and a test
accuracy of 96.98%, while also being the model with the least computational
cost.
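The "(2+1)D" factorisation behind ResNet (2+1)D can be illustrated with a small parameter-count calculation (plain arithmetic, not the authors' code): a full 3D convolution with a t x d x d kernel is replaced by a 1 x d x d spatial convolution followed by a t x 1 x 1 temporal one, routed through M intermediate channels chosen so the parameter counts roughly match, while gaining an extra nonlinearity between the two:

```python
def conv3d_params(c_in, c_out, t, d):
    """Parameters of a full 3D convolution with a t x d x d kernel."""
    return c_in * c_out * t * d * d

def conv2plus1d_params(c_in, c_out, t, d, m):
    """Parameters of the factorised (2+1)D pair through m middle channels."""
    spatial = c_in * m * d * d       # 1 x d x d spatial convolution
    temporal = m * c_out * t         # t x 1 x 1 temporal convolution
    return spatial + temporal

c_in, c_out, t, d = 64, 64, 3, 3
full3d = conv3d_params(c_in, c_out, t, d)
# m chosen so the factorised pair roughly matches the 3D parameter count:
m = (t * d * d * c_in * c_out) // (d * d * c_in + t * c_out)
factored = conv2plus1d_params(c_in, c_out, t, d, m)
```

With these example sizes, m works out to 144 and both layouts use exactly 110,592 parameters, so the factorisation adds capacity (an extra nonlinearity) at no parameter cost.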
Weakly-supervised segmentation using inherently-explainable classification models and their application to brain tumour classification
Deep learning models have shown their potential for several applications.
However, most of the models are opaque and difficult to trust due to their
complex reasoning - commonly known as the black-box problem. Some fields, such
as medicine, require a high degree of transparency to accept and adopt such
technologies. Consequently, creating explainable/interpretable models or
applying post-hoc methods on classifiers to build trust in deep learning models
are required. Moreover, deep learning methods can be used for segmentation
tasks, which typically require hard-to-obtain, time-consuming
manually-annotated segmentation labels for training. This paper introduces
three inherently-explainable classifiers to tackle both of these problems as
one. The localisation heatmaps provided by the networks -- representing the
models' focus areas and being used in classification decision-making -- can be
directly interpreted, without requiring any post-hoc methods to derive
information for model explanation. The models are trained by using the input
image and only the classification labels as ground-truth in a supervised
fashion - without using any information about the location of the region of
interest (i.e. the segmentation labels), making the segmentation training of
the models weakly-supervised through classification labels. The final
segmentation is obtained by thresholding these heatmaps. The models were
employed for the task of multi-class brain tumour classification using two
different datasets, resulting in the best F1-score of 0.93 for the supervised
classification task while securing a median Dice score of 0.67±0.08 for the
weakly-supervised segmentation task. Furthermore, the obtained accuracy on a
subset of tumour-only images outperformed the state-of-the-art glioma tumour
grading binary classifiers, with the best model achieving 98.7% accuracy.
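The final thresholding step described above, turning a localisation heatmap into a binary segmentation mask, can be sketched as follows (synthetic heatmap, illustrative threshold value):

```python
import numpy as np

def heatmap_to_mask(heatmap, threshold=0.5):
    """Normalise the heatmap to [0, 1], then threshold it into a binary mask."""
    h = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)
    return (h >= threshold).astype(np.uint8)

# Synthetic heatmap: a bright Gaussian blob standing in for the model's
# focus area over a tumour region.
yy, xx = np.mgrid[0:32, 0:32]
heatmap = np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / 40.0)
mask = heatmap_to_mask(heatmap, threshold=0.5)
```

Because the mask is derived solely from the classifier's own heatmap, only classification labels are needed for training, which is what makes the segmentation weakly supervised.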